87 results for Computer algorithms

in Deakin Research Online - Australia


Abstract:

This paper is concerned with the problem of automatic inspection of metallic surfaces using machine vision. An experimental system has been developed to take images of external metallic surfaces, and an intelligent approach based on morphology and genetic algorithms is proposed to detect structural defects on bumpy metallic surfaces. The approach employs genetic algorithms to automatically learn morphology processing parameters such as structuring elements and the defect segmentation threshold. This paper describes the detailed procedures, including the encoding scheme, genetic operations and the evaluation function.

The proposed method has been implemented and tested on a number of metallic surfaces. The results suggest that the method provides accurate identification of defects and can be developed into a viable commercial visual inspection system.
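
The paper's encoding and evaluation details are specific to its system, but the core idea admits a compact sketch: a chromosome encodes a structuring element and a segmentation threshold, and its fitness is how well the thresholded morphological residue matches known defects. The Python below is a rough illustration under invented assumptions (a 5x5 binary structuring element plus an 8-bit threshold, and hypothetical image/ground_truth arrays), not the paper's actual procedure.

import numpy as np
from scipy import ndimage

def decode(chromosome):
    """Split a binary chromosome into a 5x5 structuring element and an
    8-bit segmentation threshold (hypothetical encoding)."""
    se = chromosome[:25].reshape(5, 5).astype(bool)
    threshold = int(chromosome[25:33] @ (2 ** np.arange(8)))
    return se, threshold

def fitness(chromosome, image, ground_truth):
    """Score a candidate: apply a morphological opening, threshold the
    residue, and compare the detected mask with the ground-truth mask."""
    se, threshold = decode(chromosome)
    if not se.any():                      # degenerate structuring element
        return 0.0
    opened = ndimage.grey_opening(image, footprint=se)
    residue = np.abs(image.astype(int) - opened.astype(int))
    detected = residue > threshold
    # F1-style overlap between detected and true defect pixels
    tp = np.logical_and(detected, ground_truth).sum()
    fp = np.logical_and(detected, ~ground_truth).sum()
    fn = np.logical_and(~detected, ground_truth).sum()
    return 2 * tp / (2 * tp + fp + fn + 1e-9)

Standard selection, crossover and mutation over such chromosomes would then search the parameter space.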


Abstract:

A question frequently asked in multi-agent systems (MASs) concerns the efficient search for suitable agents to solve a specific problem. To answer this question, different types of middle agents are usually employed. The performance of middle agents relies heavily on the matchmaking algorithms used. Matchmaking is the process of finding an appropriate provider for a requester through a middle agent. There has been substantial work on matchmaking in different kinds of middle agents. To our knowledge, almost all matchmaking algorithms currently in use miss one point: matchmaking is based only on the advertised capabilities of provider agents. The actual performance of provider agents in accomplishing delegated tasks is not considered at all. This results in inaccurate matchmaking outcomes and in random selection among provider agents with the same advertised capabilities, even though the quality of service of different providers varies from one agent to another despite identical claimed capabilities. To this end, it is argued that the practical performance of service provider agents has a significant impact on the matchmaking outcomes of middle agents. An improvement to matchmaking algorithms is proposed that enables them to consider the track records of agents in accomplishing delegated tasks. How to represent, accumulate and use track records, as well as how to assign initial values for track records in the algorithm, is discussed. A prototype is also built to verify the algorithm. With the improved algorithm, the matchmaking outcomes are more accurate and reasonable.
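
As a rough illustration of the idea (the representation and update rule here are invented, not the paper's), a middle agent might weight each provider's advertised-capability match by a track record that is updated after every delegated task, starting from a neutral prior:

from collections import defaultdict

class Matchmaker:
    """Toy middle agent that weights advertised capability matches by
    each provider's track record (a hypothetical scheme)."""

    def __init__(self, initial_record=0.5, alpha=0.2):
        self.adverts = {}                 # provider -> set of capabilities
        self.records = defaultdict(lambda: initial_record)  # neutral prior
        self.alpha = alpha                # learning rate for record updates

    def advertise(self, provider, capabilities):
        self.adverts[provider] = set(capabilities)

    def match(self, required):
        """Rank providers by (capability overlap) * (track record)."""
        required = set(required)
        scored = []
        for p, caps in self.adverts.items():
            overlap = len(required & caps) / len(required)
            if overlap > 0:
                scored.append((overlap * self.records[p], p))
        return [p for _, p in sorted(scored, reverse=True)]

    def report_outcome(self, provider, success):
        """Fold the outcome of a delegated task into the track record."""
        r = self.records[provider]
        self.records[provider] = (1 - self.alpha) * r + self.alpha * float(success)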


Abstract:

This paper reviews the appropriateness for application to large data sets of standard machine learning algorithms, which were mainly developed in the context of small data sets. Sampling and parallelisation have proved useful means of reducing computation time when learning from large data sets. However, such methods assume that algorithms designed for what are now considered small data sets are also fundamentally suitable for large data sets. It is plausible that optimal learning from large data sets requires a different type of algorithm from that best suited to small data sets. This paper investigates one respect in which data set size may affect the requirements of a learning algorithm: the bias plus variance decomposition of classification error. Experiments show that learning from large data sets may be more effective with an algorithm that places greater emphasis on bias management rather than variance management.
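
One way to make the bias/variance question concrete (using a Kohavi-Wolpert-style estimate; the paper may use a different decomposition) is to train a classifier on repeated random subsamples and measure the error of its central-tendency prediction (bias) against its disagreement with that prediction (variance). A sketch, assuming numpy arrays, integer class labels and a zero-argument sklearn-style classifier factory:

import numpy as np

def bias_variance_0_1(learner, X, y, X_test, y_test, n_rounds=20, seed=0):
    """Rough bias/variance estimate for 0-1 loss via repeated subsampling."""
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_rounds):
        idx = rng.choice(len(X), size=len(X) // 2, replace=False)
        model = learner().fit(X[idx], y[idx])
        preds.append(model.predict(X_test))
    preds = np.array(preds)                       # shape (n_rounds, n_test)
    # Main prediction = per-instance majority vote across rounds
    main = np.apply_along_axis(lambda c: np.bincount(c).argmax(), 0, preds)
    bias = np.mean(main != y_test)                # error of central tendency
    variance = np.mean(preds != main[None, :])    # disagreement with it
    return bias, variance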

Abstract:

This paper proposes a hyperlink-based web page similarity measurement and two matrix-based hierarchical web page clustering algorithms. The similarity measurement incorporates hyperlink transitivity and page importance within the web page space of interest. One clustering algorithm takes cluster overlap into account; the other does not. These algorithms require no predefined similarity thresholds for clustering and are independent of page order. Preliminary evaluations show the effectiveness of the proposed algorithms in improving clustering.
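
The paper's measure and algorithms are not reproduced here, but a simplified stand-in conveys the flavour: derive pairwise similarities from direct and two-hop (transitive) links, then merge clusters agglomeratively on that matrix. The decay factor and the stopping rule (a fixed cluster count k, where the paper instead avoids preset thresholds) are assumptions of this sketch:

import numpy as np

def link_similarity(A, decay=0.5):
    """Pairwise page similarity from a 0/1 adjacency matrix A, counting
    shared out-links plus decayed two-hop (transitive) linkage."""
    direct = A @ A.T                      # shared out-links
    two_hop = A @ A @ A.T                 # out-links reachable in two hops
    S = direct + decay * two_hop
    norm = np.sqrt(np.outer(np.diag(S), np.diag(S))) + 1e-9
    return S / norm                       # scale into [0, 1]

def cluster(S, k):
    """Plain agglomerative merging on the similarity matrix down to k
    clusters; order-independent because the global best pair merges first."""
    clusters = [{i} for i in range(len(S))]
    while len(clusters) > k:
        a, b = max(
            ((i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))),
            key=lambda ij: np.mean([S[p, q] for p in clusters[ij[0]] for q in clusters[ij[1]]]),
        )
        clusters[a] |= clusters[b]
        del clusters[b]
    return clusters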

Abstract:

In this paper, we present experimental results for three adaptive equalization algorithms: the least-mean-square (LMS) algorithm, the discrete cosine transform least-mean-square (DCT-LMS) algorithm, and the recursive least squares (RLS) algorithm. The experiments show that LMS converges slowly; that RLS converges much faster but at a high computational cost; and that DCT-LMS falls between the other two on both counts, yet is still not good enough. In a forthcoming paper we will therefore propose an H2-based algorithm to address these problems.
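
For reference, the LMS baseline being compared is the textbook stochastic-gradient tap update; DCT-LMS preconditions the input with a discrete cosine transform, and RLS replaces the gradient step with a recursive least-squares update at higher cost. A minimal LMS sketch (training sequence d, received sequence x; the paper's experimental settings are not reproduced here):

import numpy as np

def lms_equalizer(x, d, n_taps=11, mu=0.01):
    """Standard LMS adaptive filter: returns tap weights and error history."""
    w = np.zeros(n_taps)
    errors = []
    for n in range(n_taps, len(x)):
        u = x[n - n_taps:n][::-1]        # most recent samples first
        e = d[n] - w @ u                 # a-priori estimation error
        w += mu * e * u                  # gradient-descent tap update
        errors.append(e)
    return w, np.array(errors)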

Abstract:

Selecting a feature set that is optimal for a given task is a problem that plays an important role in a wide variety of contexts, including pattern recognition, image understanding and machine learning. The concept of reducing a decision table based on rough set theory is very useful for feature selection. In this paper, a genetic-algorithm-based approach is presented to search for the relative reduct of the rough set decision table. This approach can accommodate multiple criteria, such as accuracy and cost of classification, in the feature selection process, and finds an effective feature subset for texture classification. On the basis of the selected feature subset, the paper presents a method for extracting objects that stand higher than their surroundings, such as trees or forest, from colour aerial images. The experimental results show that the selected feature subset and the object extraction method presented in this paper are practical and effective.
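
The multi-criteria fitness can be sketched as follows: score a candidate feature subset (a 0/1 chromosome over the columns) by cross-validated accuracy minus a weighted penalty for the cost of the chosen features. The classifier, weighting and cost model here are illustrative assumptions, not the paper's:

import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def reduct_fitness(mask, X, y, costs, cost_weight=0.1):
    """Fitness of a feature subset: accuracy penalised by feature cost."""
    chosen = mask.astype(bool)
    if not chosen.any():                 # empty subsets get zero fitness
        return 0.0
    acc = cross_val_score(KNeighborsClassifier(), X[:, chosen], y, cv=3).mean()
    return acc - cost_weight * costs[chosen].sum() / costs.sum()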

Abstract:

An Australian automotive component company plans to assemble and deliver seats to customers on a just-in-time basis. The company's management has decided to model the operations of the seat plant to help them make decisions on capital investment and labour requirements. Seat assembly and delivery comprise four different areas. Each area is modelled independently to optimise its operations, and all four are then combined into a single plant model covering operations from assembly to delivery. Discrete event simulation software is used to model the assembly operations of the seat plant.
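
The kind of model involved can be sketched with a toy single-station discrete-event loop (the actual study uses commercial DES software and real plant data; every number below is made up):

import heapq, random

def simulate_station(n_seats=500, assemble_time=4.0, seed=1):
    """Minimal discrete-event sketch of one assembly area: seats arrive
    just-in-time, queue for a single station; returns mean queueing delay."""
    random.seed(seed)
    busy_until, waits = 0.0, []
    events = [(i * 3.5, i) for i in range(n_seats)]   # arrival every 3.5 min
    heapq.heapify(events)
    while events:
        arrival, seat = heapq.heappop(events)
        start = max(arrival, busy_until)              # wait if station is busy
        busy_until = start + random.expovariate(1.0 / assemble_time)
        waits.append(start - arrival)
    return sum(waits) / len(waits)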

Abstract:

Data mining refers to extracting or "mining" knowledge from large amounts of data, with visualization and knowledge representation techniques used to present the mined knowledge to the user. Efficient algorithms for mining frequent patterns are crucial to many data mining tasks. Since the Apriori algorithm was proposed in 1994, several methods have been proposed to improve its performance, but most still adopt its candidate set generation-and-test approach, and many do not generate all frequent patterns, making them inadequate for deriving association rules. The Pattern Decomposition (PD) algorithm significantly reduces the size of the dataset on each pass, making it more efficient to mine all frequent patterns in a large dataset; it avoids the costly process of candidate set generation and, working with the reduced datasets, saves a large amount of the counting time needed to evaluate support. In this paper, some existing frequent pattern generation algorithms are explored and compared. The results show that PD outperforms an improved version of Apriori named Direct Count of candidates & Prune transactions (DCP) by an order of magnitude and is faster than an improved FP-tree variant named Predictive Item Pruning (PIP). Further, PD is more scalable than both DCP and PIP.
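
For contrast, the candidate generate-and-test baseline that PD avoids is easy to state: textbook Apriori repeatedly generates (k+1)-item candidates from frequent k-itemsets and rescans the data to count them, whereas PD instead shrinks the dataset itself on each pass. A compact Apriori sketch (absolute support counts; subset-based candidate pruning omitted for brevity):

from itertools import combinations

def apriori(transactions, min_support):
    """Textbook Apriori: level-wise candidate generation and counting."""
    transactions = [frozenset(t) for t in transactions]
    items = {i for t in transactions for i in t}
    current = {frozenset([i]) for i in items}
    frequent, k = {}, 1
    while current:
        counts = {c: sum(c <= t for t in transactions) for c in current}
        survivors = {c: n for c, n in counts.items() if n >= min_support}
        frequent.update(survivors)
        # join step: merge frequent k-itemsets into (k+1)-item candidates
        current = {a | b for a, b in combinations(survivors, 2)
                   if len(a | b) == k + 1}
        k += 1
    return frequent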

Abstract:

Increasingly, replicated anycast servers are being used to deliver network applications and serve ever-increasing numbers of user requests. The strategies used to guarantee network bandwidth prerequisites and to balance load across the nodes of an anycast group are therefore critical to the performance of online applications. In this paper, we model user requests, network congestion and latency, and server load using a combination of hydro-dynamics and queuing theory to develop an efficient job distribution strategy. Current anycast research does not explicitly consider the system load of nodes within an anycast group when distributing requests; as a result, a heavily loaded anycast system can quickly become congested and unbalanced, with jobs routed to closely linked nodes that are already saturated with requests while more distant nodes remain relatively idle because of network bandwidth and latency considerations. Our system redirects requests from busy nodes to idle, remotely linked nodes, which process requests faster in spite of slower network access. Using an empirical study, we show that this technique can improve request performance and throughput with minimal network probing overhead.
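
A toy version of the resulting selection rule: score each group member by a blend of measured latency and reported load, so a saturated nearby node loses to an idle distant one. The fields and the fixed weighting below are illustrative; the paper derives its strategy from queuing-theoretic and hydro-dynamic models rather than a static formula:

def pick_node(nodes, load_weight=0.7):
    """Choose an anycast group member by combined latency/load cost."""
    def cost(n):
        return (1 - load_weight) * n["latency_ms"] / 100 + load_weight * n["load"]
    return min(nodes, key=cost)

# A busy close node loses to an idle remote one:
nodes = [{"name": "near", "latency_ms": 5,  "load": 0.95},
         {"name": "far",  "latency_ms": 60, "load": 0.10}]
print(pick_node(nodes)["name"])   # -> "far"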

Abstract:

Spam is commonly defined as unsolicited email messages, and the goal of spam categorization is to distinguish between spam and legitimate email messages. Spam used to be considered a mere nuisance, but the sheer volume of spam sent today has turned it into a major problem. Spam filtering can control the problem in a variety of ways, and much research in spam filtering has centred on the more sophisticated classifier-related issues; machine learning for spam classification is currently an important research issue. Support Vector Machines (SVMs) are a comparatively new learning method that achieves substantial improvements over previously preferred methods and behaves robustly across a variety of learning tasks. Because they cope with high-dimensional input, are little affected by irrelevant features and achieve high accuracy, SVMs are attractive to researchers for categorizing spam. This paper explores and identifies the use of different learning algorithms for classifying spam and legitimate e-mail messages, and presents a comparative analysis of the filtering techniques.
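
A minimal SVM spam classifier of the kind such comparisons include can be put together from standard components; the toy data below stands in for a real corpus:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

emails = ["cheap pills buy now", "meeting agenda for monday",
          "win money fast", "lunch tomorrow?"]
labels = [1, 0, 1, 0]                       # 1 = spam, 0 = legitimate

# TF-IDF features feeding a linear-kernel SVM
clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(emails, labels)
print(clf.predict(["buy cheap money"]))     # likely -> [1]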

Abstract:

The asymmetric travelling salesman problem with replenishment arcs (RATSP), arising from work related to aircraft routing, is a generalisation of the well-known ATSP. In this paper, we introduce a polynomial-size mixed-integer linear programming (MILP) formulation for the RATSP, and improve an existing exponential-size ILP formulation of Zhu [The aircraft rotation problem, Ph.D. Thesis, Georgia Institute of Technology, Atlanta, 1994] by proposing two classes of stronger cuts. We show that, under certain conditions, these two classes of stronger cuts are facet-defining for the RATSP polytope, and that ATSP facets can be lifted to give RATSP facets. We implement our polyhedral findings and develop a Lagrangean relaxation (LR)-based branch-and-bound (BNB) algorithm for the RATSP, and compare this method with solving the polynomial-size formulation using ILOG CPLEX 9.0 on both randomly generated problems and aircraft routing problems. Finally, we compare our methods with the existing method of Boland et al. [The asymmetric traveling salesman problem with replenishment arcs, European J. Oper. Res. 123 (2000) 408–427]. Both of our methods turn out to be much faster than that of Boland et al., and the LR-based BNB method is more efficient for problems that resemble aircraft rotation problems.
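
For background only: a polynomial-size MILP for the plain ATSP can be written with Miller-Tucker-Zemlin ordering variables, which illustrates what "polynomial-size formulation" means here; the paper's RATSP model additionally handles replenishment arcs and is not reproduced. A sketch using the PuLP modelling library:

import pulp

def atsp_mtz(cost):
    """MTZ formulation of the plain ATSP: degree constraints plus
    polynomially many ordering constraints instead of exponential
    subtour-elimination cuts."""
    n = len(cost)
    prob = pulp.LpProblem("ATSP", pulp.LpMinimize)
    arcs = [(i, j) for i in range(n) for j in range(n) if i != j]
    x = pulp.LpVariable.dicts("x", arcs, cat="Binary")
    u = pulp.LpVariable.dicts("u", range(1, n), lowBound=1, upBound=n - 1)
    prob += pulp.lpSum(cost[i][j] * x[i, j] for i, j in arcs)
    for i in range(n):
        prob += pulp.lpSum(x[i, j] for j in range(n) if j != i) == 1  # leave once
        prob += pulp.lpSum(x[j, i] for j in range(n) if j != i) == 1  # enter once
    for i in range(1, n):                     # MTZ subtour elimination
        for j in range(1, n):
            if i != j:
                prob += u[i] - u[j] + (n - 1) * x[i, j] <= n - 2
    prob.solve()
    return pulp.value(prob.objective)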

Abstract:

The peer-to-peer content distribution network (PCDN) has recently become a hot topic, and it holds huge potential for massive data-intensive applications on the Internet. One of the challenges in a PCDN is routing for data sources and data deliveries. In this paper, we study a type of network model formed by dynamic autonomy areas, structured source servers and proxy servers. Based on this network model, we propose a number of algorithms to address the routing and data delivery issues. To cope with the highly dynamic nature of autonomy areas, we establish dynamic tree-structure proliferation system routing, proxy routing and resource searching algorithms. Simulation results show that the performance of the proposed network model and algorithms is stable.
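
As a loose illustration of one ingredient, a dynamic delivery tree can grow by attaching each joining peer under the reachable node with spare fan-out and lowest measured latency; this toy join rule is an assumption of the sketch, far simpler than the paper's proliferation and routing algorithms:

class TreeNode:
    """Node in a toy dynamic delivery tree with bounded fan-out."""
    def __init__(self, name, max_children=4):
        self.name, self.max_children = name, max_children
        self.children = []

    def all_nodes(self):
        yield self
        for c in self.children:
            yield from c.all_nodes()

def join(root, name, latency_to):
    """Attach `name` under the best non-full node; `latency_to` maps
    node name -> latency measured from the newcomer."""
    candidates = [n for n in root.all_nodes() if len(n.children) < n.max_children]
    parent = min(candidates, key=lambda n: latency_to.get(n.name, float("inf")))
    parent.children.append(TreeNode(name))
    return parent.name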

Abstract:

Spam is commonly defined as unsolicited email messages, and the goal of spam filtering is to distinguish between spam and legitimate email messages. Much work has been done on filtering spam from legitimate email using machine learning algorithms, and substantial performance has been achieved at the cost of some false positives (FPs). For spam detection, false positives are sometimes unacceptable. In this paper, an adaptive spam filtering model based on machine learning (ML) algorithms is proposed that achieves better accuracy by reducing false positives. The model combines individual and ensemble filtering approaches built from existing, well-known ML algorithms: it considers both the individual and the collective outputs and passes them to an analyzer. A dynamic feature selection (DFS) technique is also proposed to further improve accuracy.
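
The analyzer step can be caricatured as a false-positive-averse combination rule: mark a message as spam only when a strict quorum of the individual filters agrees. The quorum value is an invented parameter, not the paper's:

def analyze(votes, spam_quorum=0.75):
    """Each filter votes spam (1) or legitimate (0); mail is marked spam
    only when a strict quorum agrees, trading a little recall for fewer
    false positives."""
    return sum(votes) / len(votes) >= spam_quorum

# Three of four filters say spam -> the 0.75 quorum is just reached:
print(analyze([1, 1, 1, 0]))   # True
print(analyze([1, 1, 0, 0]))   # False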